A general, {\em rectangular} kernel matrix may be defined as $K_{ij} = \kappa(x_i,y_j)$, where $\kappa(x,y)$ is a kernel function and $X=\{x_i\}_{i=1}^m$ and $Y=\{y_j\}_{j=1}^n$ are two sets of points. In this paper, we seek a low-rank approximation to a kernel matrix where the sets of points $X$ and $Y$ are large and are not well-separated (e.g., the points in $X$ and $Y$ may be ``intermingled''). Such rectangular kernel matrices may arise, for example, in Gaussian process regression, where $X$ corresponds to the training data and $Y$ corresponds to the test data. In this case, the points are often high-dimensional. Since the point sets are large, we must exploit the fact that the matrix arises from a kernel function and avoid forming the matrix explicitly, which rules out most algebraic techniques. In particular, we seek methods that scale linearly, i.e., with computational complexity $O(m)$ or $O(n)$ for a fixed accuracy or rank. The main idea in this paper is to {\em geometrically} select appropriate subsets of points to construct a low-rank approximation. An analysis in this paper guides how this selection should be performed.
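The geometric selection rule is the subject of the paper's analysis; as a rough illustration of the general structure such a method takes, the sketch below builds a CUR/Nyström-style rank-$r$ factorization from small, geometrically chosen subsets of $X$ and $Y$, so that only $O((m+n)r)$ kernel evaluations are needed and $K$ is never formed. The Gaussian kernel, farthest-point selection, and all function names are placeholder assumptions, not the paper's method.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """kappa(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated blockwise."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def farthest_point_subset(P, r, seed=0):
    """Pick r geometrically spread points (placeholder selection rule)."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(P)))]
    dist = np.linalg.norm(P - P[idx[0]], axis=1)
    for _ in range(r - 1):
        idx.append(int(np.argmax(dist)))
        dist = np.minimum(dist, np.linalg.norm(P - P[idx[-1]], axis=1))
    return np.array(idx)

def low_rank_kernel(X, Y, r, sigma=1.0):
    """Skeleton approximation K ~ C @ U @ R built from selected rows/columns.
    Cost is O((m + n) r) kernel evaluations plus an O(r^3) pseudoinverse."""
    I = farthest_point_subset(X, r)                 # row subset (points from X)
    J = farthest_point_subset(Y, r, seed=1)         # column subset (points from Y)
    C = gaussian_kernel(X, Y[J], sigma)             # m x r, selected columns
    U = np.linalg.pinv(gaussian_kernel(X[I], Y[J], sigma))  # r x r core
    R = gaussian_kernel(X[I], Y, sigma)             # r x n, selected rows
    return C, U, R                                  # never form the m x n matrix
```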
Many modern machine learning algorithms, such as generative adversarial networks (GANs) and adversarial training, can be formulated as minimax optimization. Gradient descent ascent (GDA) is the most commonly used algorithm due to its simplicity. However, GDA can converge to non-optimal minimax points. We propose a new minimax optimization framework, GDA-AM, that views the GDA dynamics as a fixed-point iteration and uses Anderson mixing to solve for the local minimax. It addresses the divergence problem of simultaneous GDA and accelerates the convergence of alternating GDA. We show theoretically that the algorithm can achieve global convergence for bilinear problems under mild conditions. We also empirically show that GDA-AM solves a variety of minimax problems and improves GAN training on several datasets.
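To make the fixed-point view concrete, here is a minimal sketch on the bilinear toy problem $f(x,y)=x^\top A y$: one simultaneous GDA step is treated as a fixed-point map $g(z)$, and Anderson mixing over a short history of iterates replaces the plain update. The step size, memory length, and least-squares formulation below are illustrative assumptions, not the exact GDA-AM algorithm.

```python
import numpy as np

def gda_am(A, eta=0.1, mem=5, iters=200, tol=1e-10, seed=0):
    """Anderson-accelerated simultaneous GDA on f(x, y) = x^T A y.
    Plain simultaneous GDA diverges on this bilinear problem; treating the
    GDA update as a fixed-point map and Anderson-mixing its iterates stabilizes it."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    z = rng.standard_normal(n + m)                  # z = [x; y]

    def g(z):                                       # one simultaneous GDA step
        x, y = z[:n], z[n:]
        return np.concatenate([x - eta * A @ y, y + eta * A.T @ x])

    G_hist, F_hist = [], []
    for _ in range(iters):
        gz = g(z)
        f = gz - z                                  # fixed-point residual
        G_hist.append(gz); F_hist.append(f)
        if len(F_hist) > mem + 1:
            G_hist.pop(0); F_hist.pop(0)
        if len(F_hist) == 1:
            z_new = gz                              # plain step until history exists
        else:
            dF = np.stack([F_hist[i+1] - F_hist[i] for i in range(len(F_hist)-1)], 1)
            dG = np.stack([G_hist[i+1] - G_hist[i] for i in range(len(G_hist)-1)], 1)
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            z_new = gz - dG @ gamma                 # Anderson-mixed iterate
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```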
The scalability problem has been one of the most significant barriers limiting the adoption of blockchains. Blockchain sharding is a promising approach to this problem. However, the sharding mechanism introduces a significant number of cross-shard transactions, which are expensive to process. This paper focuses on the transaction allocation problem to reduce the number of cross-shard transactions for better scalability. In particular, we systematically formulate the transaction allocation problem and convert it to the community detection problem on a graph. A deterministic and fast allocation scheme, TxAllo, is proposed to dynamically infer the allocation of accounts and their associated transactions. It directly optimizes the system throughput, considering both the number of cross-shard transactions and the workload balance among shards. We evaluate the performance of TxAllo on an Ethereum dataset containing over 91 million transactions. Our evaluation results show that for a blockchain with 60 shards, TxAllo reduces the cross-shard transaction ratio from 98% (obtained by traditional hash-based allocation) to about 12%, while the workload balance is well maintained. Compared with other methods, the execution time of TxAllo is almost negligible. For example, when updating the allocation every hour, the execution of TxAllo takes only 0.5 seconds on average, whereas concurrent works such as BrokerChain (INFOCOM'22), which leverages the classic METIS method, require 422 seconds.
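The abstract does not spell out TxAllo's algorithm, so the following is only a generic sketch of the kind of balance-aware community detection it describes: accounts are vertices of the transaction graph, and each account greedily moves to the shard holding most of its transaction volume, subject to a per-shard capacity cap. All names and parameters are illustrative assumptions.

```python
from collections import defaultdict

def allocate_accounts(transactions, n_shards, rounds=5):
    """Greedy, deterministic account-to-shard allocation on the transaction graph.
    Each account repeatedly moves to the shard where most of its transaction
    volume lives, subject to a per-shard capacity cap (workload balance)."""
    # Build weighted adjacency: edge weight = number of txs between two accounts.
    weight = defaultdict(int)
    accounts = set()
    for sender, receiver in transactions:
        weight[(sender, receiver)] += 1
        weight[(receiver, sender)] += 1
        accounts.update([sender, receiver])

    neighbors = defaultdict(list)
    for (a, b), w in weight.items():
        neighbors[a].append((b, w))

    accounts = sorted(accounts)                          # deterministic order
    cap = -(-len(accounts) // n_shards)                  # ceil: balance constraint
    shard = {a: i % n_shards for i, a in enumerate(accounts)}  # round-robin start

    for _ in range(rounds):
        load = defaultdict(int)
        for a in accounts:
            load[shard[a]] += 1
        for a in accounts:
            score = defaultdict(int)
            for b, w in neighbors[a]:
                score[shard[b]] += w                     # tx volume per shard
            best = max(score, key=score.get, default=shard[a])
            if best != shard[a] and load[best] < cap:
                load[shard[a]] -= 1
                load[best] += 1
                shard[a] = best
    return shard
```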
The performance of decision policies and prediction models often deteriorates when applied to environments different from the ones seen during training. To ensure reliable operation, we propose and analyze the stability of a system under distribution shift, which is defined as the smallest change in the underlying environment that causes the system's performance to deteriorate beyond a permissible threshold. In contrast to standard tail risk measures and distributionally robust losses that require the specification of a plausible magnitude of distribution shift, the stability measure is defined in terms of a more intuitive quantity: the level of acceptable performance degradation. We develop a minimax optimal estimator of stability and analyze its convergence rate, which exhibits a fundamental phase shift behavior. Our characterization of the minimax convergence rate shows that evaluating stability against large performance degradation incurs a statistical cost. Empirically, we demonstrate the practical utility of our stability framework by using it to compare system designs on problems where robustness to distribution shift is critical.
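For concreteness, one plausible formalization of this stability measure (the notation and choice of divergence are our assumptions; the paper's exact definition may differ) is
\[
\mathcal{S}(\theta;\epsilon) \;=\; \inf_{Q}\Big\{\, D_{\mathrm{KL}}(Q \,\|\, P_0) \;:\; \mathbb{E}_{Q}[\ell(\theta;Z)] \,\ge\, \mathbb{E}_{P_0}[\ell(\theta;Z)] + \epsilon \,\Big\},
\]
i.e., the smallest distributional perturbation of the training environment $P_0$ under which the expected loss of the system $\theta$ degrades by more than the acceptable amount $\epsilon$; the estimator and its minimax convergence rate are then studied as functions of $\epsilon$.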
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate the research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks, including Deepfake Security Competition, Autonomous Driving Security Competition, and Face Recognition Security Competition. This report will introduce the competition rules of these three tracks and the solutions of top-ranking teams in each track.
In-game toxic language has become a pressing issue in the gaming industry and community. Several online game toxicity analysis frameworks and models have been proposed. However, detecting toxicity remains challenging due to the nature of in-game chat, which is extremely short. In this paper, we describe how the in-game toxic language shared task was established using real-world in-game chat data. In addition, we propose and introduce a model/framework for toxic language token tagging (slot filling) from in-game chat. The data and code will be released.
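As an illustration of what token tagging (slot filling) over such short chat lines looks like, here is a minimal sketch that encodes a toxic span with BIO labels and recovers spans from the tags; the label set, example, and helper function are hypothetical and not the shared task's actual format.

```python
# Toxic-span token tagging framed as BIO slot filling (illustrative only; the
# shared task's actual label set and data format may differ).
chat = ["you", "absolute", "trash", "player", "gg"]
tags = ["O",   "B-TOXIC",  "I-TOXIC", "O",     "O"]   # hypothetical gold labels

def spans_from_bio(tokens, tags):
    """Recover (start, end, text) toxic spans from BIO tags."""
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):            # sentinel flushes last span
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, " ".join(tokens[start:i])))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i, " ".join(tokens[start:i])))
            start = None
    return spans

print(spans_from_bio(chat, tags))   # [(1, 3, 'absolute trash')]
```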
We present the results of the NLP Community Metasurvey. Run from May to June 2022, the survey elicited opinions on controversial issues, including the field's industry influence and concerns about AGI and ethics. Our results put concrete numbers to several controversies: for example, respondents are split almost exactly in half on questions about the importance of artificial general intelligence, whether language models understand language, and whether linguistic structure is necessary for solving NLP problems. In addition, the survey posed meta-questions asking respondents to predict the distribution of survey responses. This allows us not only to gain insight into the range of beliefs held by NLP researchers, but also to uncover false sociological beliefs, where the community's predictions do not match reality. We find such mismatches on a wide range of questions. Among other results, the community greatly overestimates its own belief in the usefulness of benchmarks and in the potential of scaling to solve real-world problems, while underestimating its belief in the importance of linguistic structure, inductive bias, and interdisciplinary science.
The noisy channel model is particularly effective in neural machine translation (NMT). However, recent approaches such as beam search and reranking (BSR) incur significant computational overhead during inference, making real-world applications infeasible. We aim to build an amortized noisy-channel NMT model such that greedily decoding from it produces translations that maximize the same reward as translations generated with BSR. We explore three approaches: knowledge distillation, one-step-deviation imitation learning, and Q-learning. The first approach obtains the noisy-channel signal from a pseudo-corpus, while the latter two aim to optimize directly against the noisy-channel MT reward. All three approaches speed up inference by one to two orders of magnitude. For all three approaches, the generated translations fail to achieve rewards comparable to BSR, but their translation quality as measured by BLEU is similar to that of translations produced by BSR.
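For reference, the noisy-channel reward that BSR searches over, and that the amortized model is trained to match under greedy decoding, is typically a weighted combination of direct, channel (reverse), and language-model scores; a common form (the weights and length normalization shown are illustrative assumptions) is
\[
R(y \mid x) \;=\; \frac{1}{|y|}\log p_{\mathrm{dir}}(y \mid x) \;+\; \frac{\lambda_1}{|x|}\log p_{\mathrm{ch}}(x \mid y) \;+\; \frac{\lambda_2}{|y|}\log p_{\mathrm{lm}}(y).
\]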
To enable building and testing models for long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Current models perform poorly on this task (55.4%) and lag far behind human performance (93.5%).
With the fast development of big data, it has become easier than before to learn the optimal decision rule by updating the decision rule recursively and making online decisions. We study the online statistical inference of model parameters in a contextual bandit framework of sequential decision-making. We propose a general framework for online and adaptive data collection environments that updates decision rules via weighted stochastic gradient descent. We allow different weighting schemes for the stochastic gradient and establish the asymptotic normality of the parameter estimator. Our proposed estimator significantly improves the asymptotic efficiency over the previous averaged SGD approach via inverse probability weights. We also conduct an optimality analysis of the weights in a linear regression setting. We provide a Bahadur representation of the proposed estimator and show that the remainder term in the Bahadur representation entails a slower convergence rate compared to classical SGD due to adaptive data collection.
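As a rough illustration of the setting, the sketch below runs epsilon-greedy contextual-bandit data collection with a linear reward model and takes weighted SGD steps, returning the Polyak-Ruppert average as the parameter estimator; the specific weight, exploration rule, and learning-rate schedule are placeholder assumptions rather than the paper's weighting schemes.

```python
import numpy as np

def weighted_sgd_bandit(contexts, rewards_fn, n_actions, steps, lr0=0.5, eps=0.1, seed=0):
    """Weighted SGD in a contextual bandit with a linear reward model (minimal sketch).
    At each step: observe a context, choose an action (epsilon-greedy here), observe
    a reward, and take a weighted SGD step on the squared loss. The running
    (Polyak-Ruppert) average theta_bar is the parameter estimator whose
    asymptotic behavior the weighting scheme is meant to control."""
    rng = np.random.default_rng(seed)
    d = contexts.shape[1]
    theta = np.zeros((n_actions, d))
    theta_bar = np.zeros_like(theta)
    for t in range(1, steps + 1):
        x = contexts[rng.integers(len(contexts))]
        # epsilon-greedy data collection (adaptive: depends on the current estimate)
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
            prop = eps / n_actions
        else:
            a = int(np.argmax(theta @ x))
            prop = 1 - eps + eps / n_actions
        r = rewards_fn(x, a)
        w = 1.0 / np.sqrt(prop)                 # illustrative weight (propensity-based)
        grad = (theta[a] @ x - r) * x           # gradient of 0.5 * (x^T theta_a - r)^2
        theta[a] -= (lr0 / np.sqrt(t)) * w * grad
        theta_bar += (theta - theta_bar) / t    # online Polyak-Ruppert average
    return theta_bar
```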